Moderate: Red Hat OpenShift Container Storage 4.6.0 security, bug fix, enhancement update

Related Vulnerabilities: CVE-2018-10103 CVE-2018-10105 CVE-2018-14461 CVE-2018-14462 CVE-2018-14463 CVE-2018-14464 CVE-2018-14465 CVE-2018-14466 CVE-2018-14467 CVE-2018-14468 CVE-2018-14469 CVE-2018-14470 CVE-2018-14879 CVE-2018-14880 CVE-2018-14881 CVE-2018-14882 CVE-2018-16227 CVE-2018-16228 CVE-2018-16229 CVE-2018-16230 CVE-2018-16300 CVE-2018-16451 CVE-2018-16452 CVE-2018-20843 CVE-2019-1551 CVE-2019-5018 CVE-2019-8625 CVE-2019-8710 CVE-2019-8720 CVE-2019-8743 CVE-2019-8764 CVE-2019-8766 CVE-2019-8769 CVE-2019-8771 CVE-2019-8782 CVE-2019-8783 CVE-2019-8808 CVE-2019-8811 CVE-2019-8812 CVE-2019-8813 CVE-2019-8814 CVE-2019-8815 CVE-2019-8816 CVE-2019-8819 CVE-2019-8820 CVE-2019-8823 CVE-2019-8835 CVE-2019-8844 CVE-2019-8846 CVE-2019-11068 CVE-2019-13050 CVE-2019-13627 CVE-2019-14889 CVE-2019-15165 CVE-2019-15166 CVE-2019-15903 CVE-2019-16168 CVE-2019-16935 CVE-2019-18197 CVE-2019-18609 CVE-2019-19221 CVE-2019-19906 CVE-2019-19956 CVE-2019-20218 CVE-2019-20387 CVE-2019-20388 CVE-2019-20454 CVE-2019-20807 CVE-2019-20907 CVE-2019-20916 CVE-2020-1730 CVE-2020-1751 CVE-2020-1752 CVE-2020-3862 CVE-2020-3864 CVE-2020-3865 CVE-2020-3867 CVE-2020-3868 CVE-2020-3885 CVE-2020-3894 CVE-2020-3895 CVE-2020-3897 CVE-2020-3899 CVE-2020-3900 CVE-2020-3901 CVE-2020-3902 CVE-2020-6405 CVE-2020-7595 CVE-2020-7720 CVE-2020-8177 CVE-2020-8237 CVE-2020-8492 CVE-2020-9327 CVE-2020-9802 CVE-2020-9803 CVE-2020-9805 CVE-2020-9806 CVE-2020-9807 CVE-2020-9843 CVE-2020-9850 CVE-2020-9862 CVE-2020-9893 CVE-2020-9894 CVE-2020-9895 CVE-2020-9915 CVE-2020-9925 CVE-2020-10018 CVE-2020-10029 CVE-2020-11793 CVE-2020-13630 CVE-2020-13631 CVE-2020-13632 CVE-2020-14019 CVE-2020-14040 CVE-2020-14382 CVE-2020-14391 CVE-2020-14422 CVE-2020-15503 CVE-2020-15586 CVE-2020-16845 CVE-2020-25660

Synopsis

Moderate: Red Hat OpenShift Container Storage 4.6.0 security, bug fix, enhancement update

Type/Severity

Security Advisory: Moderate

Topic

Updated images are now available for Red Hat OpenShift Container Storage 4.6.0 on Red Hat Enterprise Linux 8.

Red Hat Product Security has rated this update as having a security impact of Moderate. A Common Vulnerability Scoring System (CVSS) base score, which gives a detailed severity rating, is available for each vulnerability from the CVE link(s) in the References section.

Description

Red Hat OpenShift Container Storage is software-defined storage integrated with and optimized for the Red Hat OpenShift Container Platform. It provides highly scalable, production-grade persistent storage for stateful applications running in the Red Hat OpenShift Container Platform. In addition to persistent storage, Red Hat OpenShift Container Storage provisions a multicloud data management service with an S3-compatible API.

These updated images include numerous security fixes, bug fixes, and enhancements.

Security Fix(es):

  • nodejs-node-forge: prototype pollution via the util.setPath function (CVE-2020-7720)
  • nodejs-json-bigint: Prototype pollution via `__proto__` assignment could result in DoS (CVE-2020-8237)
  • golang.org/x/text: possibility to trigger an infinite loop in encoding/unicode could lead to crash (CVE-2020-14040)
  • golang: data race in certain net/http servers including ReverseProxy can lead to DoS (CVE-2020-15586)
  • golang: ReadUvarint and ReadVarint can read an unlimited number of bytes from invalid inputs (CVE-2020-16845)

For more details about the security issue(s), including the impact, a CVSS score, acknowledgments, and other related information, refer to the CVE page(s) listed in the References section.
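
As a loose illustration of the last item in the list above (CVE-2020-16845), the following Go sketch is not taken from the patched packages; it only shows how a caller can bound how many bytes an untrusted varint decode may consume, which is essentially the limit the fixed encoding/binary enforces internally.

```go
// Minimal sketch: bound how many bytes an untrusted uvarint decode may read,
// so malformed input (endless continuation bytes) cannot trigger an
// effectively unbounded read.
package main

import (
	"bufio"
	"bytes"
	"encoding/binary"
	"fmt"
	"io"
)

// readBoundedUvarint decodes a uvarint from r but never consumes more than
// binary.MaxVarintLen64 bytes.
func readBoundedUvarint(r io.Reader) (uint64, error) {
	limited := bufio.NewReader(io.LimitReader(r, binary.MaxVarintLen64))
	return binary.ReadUvarint(limited)
}

func main() {
	// Bytes with the continuation bit set never terminate a varint; the
	// limit turns that into an error instead of a megabyte-long read.
	malformed := bytes.NewReader(bytes.Repeat([]byte{0x80}, 1<<20))
	if _, err := readBoundedUvarint(malformed); err != nil {
		fmt.Println("rejected malformed varint:", err)
	}
}
```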

Users are directed to the Red Hat OpenShift Container Storage Release Notes for information on the most significant of these changes:

https://access.redhat.com/documentation/en-us/red_hat_openshift_container_storage/4.6/html/4.6_release_notes/index

All Red Hat OpenShift Container Storage users are advised to upgrade to these updated images.

Solution

Before applying this update, make sure all previously released errata relevant to your system have been applied.

For details on how to apply this update, refer to:

https://access.redhat.com/articles/11258
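
The linked article describes how to apply the update; purely as an illustrative convenience (not part of the advisory), the sketch below assumes a logged-in `oc` client and the default openshift-storage namespace, and prints the installed ClusterServiceVersions so the OCS operator version can be confirmed after the update.

```go
// Hypothetical post-update check: list ClusterServiceVersions in the
// openshift-storage namespace via `oc` to confirm the reported OCS version.
package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	// Requires an authenticated `oc` session with access to openshift-storage.
	cmd := exec.Command("oc", "get", "csv", "-n", "openshift-storage")
	cmd.Stdout = os.Stdout
	cmd.Stderr = os.Stderr
	if err := cmd.Run(); err != nil {
		fmt.Fprintln(os.Stderr, "oc failed:", err)
		os.Exit(1)
	}
}
```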

Affected Products

  • Red Hat OpenShift Container Storage 4 x86_64

Fixes

  • BZ - 1806266 - Require an extension to the cephfs subvolume commands that can return metadata regarding a subvolume
  • BZ - 1813506 - Dockerfile not compatible with docker and buildah
  • BZ - 1817438 - OSDs not distributed uniformly across OCS nodes on a 9-node AWS IPI setup
  • BZ - 1817850 - [BAREMETAL] rook-ceph-operator does not reconcile when the osd deployment is deleted during node replacement
  • BZ - 1827157 - OSD hitting default CPU limit on AWS i3en.2xlarge instances limiting performance
  • BZ - 1829055 - [RFE] add insecureEdgeTerminationPolicy: Redirect to noobaa mgmt route (http to https)
  • BZ - 1833153 - Add a variable for the rook operator's sleep time between checks of a downed OSD+Node
  • BZ - 1836299 - NooBaa Operator deploys with HPA that fires maxreplicas alerts by default
  • BZ - 1842254 - [NooBaa] Compression stats do not add up when compression is disabled
  • BZ - 1845976 - OCS 4.5 Independent mode: must-gather commands fail to collect ceph command outputs from external cluster
  • BZ - 1849771 - [RFE] Account created by OBC should have same permissions as bucket owner
  • BZ - 1853652 - CVE-2020-14040 golang.org/x/text: possibility to trigger an infinite loop in encoding/unicode could lead to crash
  • BZ - 1854500 - [tracker-rhcs bug 1838931] mgr/volumes: add command to return metadata of a subvolume snapshot
  • BZ - 1854501 - [Tracker-rhcs bug 1848494 ]pybind/mgr/volumes: Add the ability to keep snapshots of subvolumes independent of the source subvolume
  • BZ - 1854503 - [tracker-rhcs-bug 1848503] cephfs: Provide alternatives to increase the total cephfs subvolume snapshot counts to greater than the current 400 across a Cephfs volume
  • BZ - 1856953 - CVE-2020-15586 golang: data race in certain net/http servers including ReverseProxy can lead to DoS
  • BZ - 1858195 - [GSS] registry pod stuck in ContainerCreating due to pvc from cephfs storage class fail to mount
  • BZ - 1859183 - PV expansion is failing in retry loop in pre-existing PV after upgrade to OCS 4.5 (i.e. if the PV spec does not contain expansion params)
  • BZ - 1859229 - Rook should delete extra MON PVCs in case first reconcile takes too long and rook skips "b" and "c" (spawned from Bug 1840084#c14)
  • BZ - 1859478 - OCS 4.6 : Upon deployment, CSI Pods in CLBO with error - flag provided but not defined: -metadatastorage
  • BZ - 1860022 - OCS 4.6 Deployment: LBP CSV and pod should not be deployed since ob/obc CRDs are owned from OCS 4.5 onwards
  • BZ - 1860034 - OCS 4.6 Deployment in ocs-ci : Toolbox pod in ContainerCreationError due to key admin-secret not found
  • BZ - 1860670 - OCS 4.5 Uninstall External: Openshift-storage namespace in Terminating state as CephObjectStoreUser had finalizers remaining
  • BZ - 1860848 - Add validation for rgw-pool-prefix in the ceph-external-cluster-details-exporter script
  • BZ - 1861780 - [Tracker BZ1866386][IBM s390x] Mount Failed for CEPH while running a couple of OCS test cases.
  • BZ - 1865938 - CSIDrivers missing in OCS 4.6
  • BZ - 1867024 - [ocs-operator] operator v4.6.0-519.ci is in Installing state
  • BZ - 1867099 - CVE-2020-16845 golang: ReadUvarint and ReadVarint can read an unlimited number of bytes from invalid inputs
  • BZ - 1868060 - [External Cluster] Noobaa-default-backingstore PV in released state upon OCS 4.5 uninstall (Secret not found)
  • BZ - 1868703 - [rbd] After volume expansion, the new size is not reflected on the pod
  • BZ - 1869411 - capture full crash information from ceph
  • BZ - 1870061 - [RHEL][IBM] OCS un-install should make the devices raw
  • BZ - 1870338 - OCS 4.6 must-gather : ocs-must-gather-xxx-helper pod in ContainerCreationError (couldn't find key admin-secret)
  • BZ - 1870631 - OCS 4.6 Deployment : RGW pods went into 'CrashLoopBackOff' state on Z Platform
  • BZ - 1872119 - Updates don't work on StorageClass which will keep PV expansion disabled for upgraded cluster
  • BZ - 1872696 - [ROKS][RFE]NooBaa Configure IBM COS as default backing store
  • BZ - 1873864 - Noobaa: On a baremetal RHCOS cluster, some backingstores are stuck in PROGRESSING state with INVALID_ENDPOINT TemporaryError
  • BZ - 1874606 - CVE-2020-7720 nodejs-node-forge: prototype pollution via the util.setPath function
  • BZ - 1875476 - Change noobaa logo in the noobaa UI
  • BZ - 1877339 - Incorrect use of logr
  • BZ - 1877371 - NooBaa UI warning message on Deploy Kubernetes Pool process - typo and shown number is incorrect
  • BZ - 1878153 - OCS 4.6 must-gather: collect node information under cluster_scoped_resources/oc_output directory
  • BZ - 1878714 - [FIPS enabled] BadDigest error on file upload to noobaa bucket
  • BZ - 1878853 - [External Mode] ceph-external-cluster-details-exporter.py does not tolerate TLS enabled RGW
  • BZ - 1879008 - ocs-osd-removal job fails because it can't find admin-secret in rook-ceph-mon secret
  • BZ - 1879072 - Deployment with encryption at rest is failing to bring up OSD pods
  • BZ - 1879919 - [External] Upgrade mechanism from OCS 4.5 to OCS 4.6 needs to be fixed
  • BZ - 1880255 - Collect rbd info and subvolume info and snapshot info command output
  • BZ - 1881028 - CVE-2020-8237 nodejs-json-bigint: Prototype pollution via `__proto__` assignment could result in DoS
  • BZ - 1881071 - [External] Upgrade mechanism from OCS 4.5 to OCS 4.6 needs to be fixed
  • BZ - 1882397 - MCG decompression problem with snappy on s390x arch
  • BZ - 1883253 - CSV doesn't contain values required for UI to enable minimal deployment and cluster encryption
  • BZ - 1883398 - Update csi sidecar containers in rook
  • BZ - 1883767 - Using placement strategies in cluster-service.yaml causes ocs-operator to crash
  • BZ - 1883810 - [External mode] RGW metrics is not available after OCS upgrade from 4.5 to 4.6
  • BZ - 1883927 - Deployment with encryption at rest is failing to bring up OSD pods
  • BZ - 1885175 - Handle disappeared underlying device for encrypted OSD
  • BZ - 1885428 - panic seen in rook-ceph during uninstall - "close of closed channel"
  • BZ - 1885648 - [Tracker for https://bugzilla.redhat.com/show_bug.cgi?id=1885700] FSTYPE for localvolumeset devices shows up as ext2 after uninstall
  • BZ - 1885971 - ocs-storagecluster-cephobjectstore doesn't report true state of RGW
  • BZ - 1886308 - Default VolumeSnapshot Classes not created in External Mode
  • BZ - 1886348 - osd removal job failed with status "Error"
  • BZ - 1886551 - Clone creation failed after a timeout of 5 hours on Azure platform for 3 CephFS PVCs (PVC sizes: 1, 25 and 100 GB)
  • BZ - 1886709 - [External] RGW storageclass disappears after upgrade from OCS 4.5 to 4.6
  • BZ - 1886859 - OCS 4.6: Uninstall stuck indefinitely if any Ceph pods are in Pending state before uninstall
  • BZ - 1886873 - [OCS 4.6 External/Internal Uninstall] - Storage Cluster deletion stuck indefinitely, "failed to delete object store", remaining users: [noobaa-ceph-objectstore-user]
  • BZ - 1888583 - [External] When deployment is attempted without specifying the monitoring-endpoint while generating JSON, the CSV is stuck in installing state
  • BZ - 1888593 - [External] Add validation for monitoring-endpoint and port in the exporter script
  • BZ - 1888614 - [External] Unreachable monitoring-endpoint used during deployment causes ocs-operator to crash
  • BZ - 1889441 - Traceback error message while running OCS 4.6 must-gather
  • BZ - 1889683 - [GSS] Noobaa Problem when setting public access to a bucket
  • BZ - 1889866 - Post node power off/on, an unused MON PVC still stays back in the cluster
  • BZ - 1890183 - [External] ocs-operator logs are filled with "failed to reconcile metrics exporter"
  • BZ - 1890638 - must-gather helper pod should be deleted after collecting ceph crash info
  • BZ - 1890971 - [External] RGW metrics are not available if anything else except 9283 is provided as the monitoring-endpoint-port
  • BZ - 1891856 - ocs-metrics-exporter pod should have tolerations for OCS taint
  • BZ - 1892206 - [GSS] Ceph image/version mismatch
  • BZ - 1892234 - Clone #95 creation failed for CephFS PVC (10 GB PVC size) during multiple clone creation test
  • BZ - 1893624 - Must Gather is not collecting the tar file from NooBaa diagnose
  • BZ - 1893691 - OCS 4.6 must_gather fails to complete in 600 sec
  • BZ - 1893714 - Bad response for upload an object with encryption
  • BZ - 1895402 - Mon pods didn't get upgraded within the 720-second timeout during upgrade from OCS 4.5 to 4.6
  • BZ - 1896298 - [RFE] Monitoring for Namespace buckets and resources
  • BZ - 1896831 - Clone #452 for RBD PVC (PVC size 1 GB) failed to be created within 600 secs
  • BZ - 1898521 - [CephFS] Deleting cephfsplugin pod along with app pods will make PV remain in Released state after deleting the PVC
  • BZ - 1902627 - must-gather should wait for debug pods to be in ready state
  • BZ - 1904171 - RGW Service is unavailable for a short period during upgrade to OCS 4.6
